Recent CLIP-guided 3D optimization methods, e.g., DreamFields and PureCLIPNeRF, achieve great success in zero-shot text-guided 3D synthesis. However, because they are trained from scratch with random initialization and no prior knowledge, these methods usually fail to generate accurate and faithful 3D structures that conform to the input text. In this paper, we make the first attempt to introduce an explicit 3D shape prior into CLIP-guided 3D optimization methods. Specifically, we first generate a high-quality 3D shape from the input text in a text-to-shape stage and use it as the 3D shape prior. We then use this shape to initialize a neural radiance field, which is optimized with the full prompt. For text-to-shape generation, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between images synthesized by the text-to-image model and the shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, Dream3D, is capable of generating imaginative 3D content with better visual quality and shape accuracy than state-of-the-art methods.
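A hedged sketch of the optimization stage described above: a NeRF initialized from the text-to-shape output is refined by maximizing CLIP similarity between rendered views and the full prompt. `ShapePriorNeRF`, `text_to_shape`, and `render_random_view` are hypothetical placeholders rather than the paper's implementation; the CLIP calls follow the openai/CLIP package.

```python
# Minimal sketch of CLIP-guided NeRF optimization seeded by a 3D shape prior.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device, jit=False)
clip_model = clip_model.float().eval()

with torch.no_grad():
    text_feat = clip_model.encode_text(clip.tokenize(["a wooden armchair"]).to(device))
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# hypothetical: NeRF initialized from the text-to-shape output
nerf = ShapePriorNeRF(init_shape=text_to_shape("a wooden armchair")).to(device)
optimizer = torch.optim.Adam(nerf.parameters(), lr=1e-3)

for step in range(10_000):
    # hypothetical differentiable renderer returning a CLIP-normalized (1, 3, 224, 224) view
    view = render_random_view(nerf, resolution=224)
    img_feat = clip_model.encode_image(view)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = 1.0 - (img_feat * text_feat).sum(dim=-1).mean()  # maximize CLIP similarity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```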
To reproduce the success of text-to-image (T2I) generation, recent works in text-to-video (T2V) generation employ large-scale text-video datasets for fine-tuning. However, this paradigm is computationally expensive. Humans have the remarkable ability to learn new visual concepts from a single exemplar. We therefore study a new T2V generation problem: One-Shot Video Generation, where only a single text-video pair is available for training an open-domain T2V generator. Intuitively, we propose to adapt a T2I diffusion model pretrained on massive image data for T2V generation. We make two key observations: 1) T2I models are able to generate images that align well with verb terms; 2) extending T2I models to generate multiple images concurrently exhibits surprisingly good content consistency. To further learn continuous motion, we propose Tune-A-Video with a tailored Sparse-Causal Attention, which generates videos from text prompts via efficient one-shot tuning of pretrained T2I diffusion models. Tune-A-Video produces temporally coherent videos across various applications, such as changing the subject or background, attribute editing, and style transfer, demonstrating the versatility and effectiveness of our method.
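A self-contained sketch of the sparse-causal idea: each frame's queries attend only to the keys/values of the first frame and the immediately preceding frame. The tensor layout and projection structure here are illustrative assumptions; the official Tune-A-Video code may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseCausalAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.heads = heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, tokens, dim) -- spatial tokens of each video frame
        b, f, n, d = x.shape
        q = self.to_q(x)
        # key/value bank per frame: concat(first frame, previous frame)
        first = x[:, :1].expand(-1, f, -1, -1)            # (b, f, n, d)
        prev = torch.cat([x[:, :1], x[:, :-1]], dim=1)    # frame i-1 (frame 0 reuses itself)
        kv = torch.cat([first, prev], dim=2)              # (b, f, 2n, d)
        k, v = self.to_k(kv), self.to_v(kv)

        def split(t):  # (b, f, tokens, d) -> (b*f, heads, tokens, d/heads)
            return t.reshape(b * f, t.shape[2], self.heads, d // self.heads).transpose(1, 2)

        out = F.scaled_dot_product_attention(split(q), split(k), split(v))
        out = out.transpose(1, 2).reshape(b, f, n, d)
        return self.proj(out)

video_tokens = torch.randn(1, 8, 64, 320)                 # 8 frames, 64 spatial tokens, dim 320
print(SparseCausalAttention(320)(video_tokens).shape)     # torch.Size([1, 8, 64, 320])
```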
Reference-based Super-Resolution (Ref-SR) has recently emerged as a promising paradigm for enhancing a low-resolution (LR) input image or video with an additional high-resolution (HR) reference image. Existing Ref-SR methods mostly rely on implicit correspondence matching to borrow HR textures from reference images and compensate for the information loss in input images. However, performing local transfer is difficult because of two gaps between input and reference images: the transformation gap (e.g., scale and rotation) and the resolution gap (e.g., HR vs. LR). To tackle these challenges, we propose C2-Matching, which performs explicit, robust matching across transformation and resolution. 1) To bridge the transformation gap, we propose a contrastive correspondence network, which learns transformation-robust correspondences using augmented views of the input image. 2) To address the resolution gap, we adopt teacher-student correlation distillation, which distills knowledge from the easier HR-HR matching to guide the more ambiguous LR-HR matching. 3) Finally, we design a dynamic aggregation module to address potential misalignment between input and reference images. In addition, to faithfully evaluate reference-based image super-resolution under a realistic setting, we contribute the Webly-Referenced SR (WR-SR) dataset, which mimics practical usage scenarios. We also extend C2-Matching to reference-based video super-resolution, where an image taken in a similar scene serves as the HR reference. Extensive experiments demonstrate that C2-Matching significantly outperforms state-of-the-art methods on the standard CUFED5 benchmark and also boosts video SR performance when incorporated into video SR pipelines.
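A hedged sketch of the teacher-student correlation distillation idea: a teacher matches the HR input against the HR reference, while the student, which only sees the (upsampled) LR input, is trained so that its LR-HR correlation map mimics the teacher's easier HR-HR one. The encoder architectures and the L1 distillation loss are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def correlation(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    # feat_*: (B, C, H, W) -> cosine correlation between all spatial positions, (B, H*W, H*W)
    a = F.normalize(feat_a.flatten(2), dim=1)
    b = F.normalize(feat_b.flatten(2), dim=1)
    return torch.bmm(a.transpose(1, 2), b)

# stand-in feature extractors for the teacher (frozen) and student correspondence networks
teacher = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 64, 3, padding=1))
student = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 64, 3, padding=1))

hr_input = torch.randn(1, 3, 40, 40)
ref = torch.randn(1, 3, 40, 40)
lr_input = F.interpolate(hr_input, scale_factor=0.25, mode="bicubic")
lr_up = F.interpolate(lr_input, size=hr_input.shape[-2:], mode="bicubic")

with torch.no_grad():
    corr_teacher = correlation(teacher(hr_input), teacher(ref))   # easy HR-HR matching
corr_student = correlation(student(lr_up), student(ref))           # ambiguous LR-HR matching
distill_loss = F.l1_loss(corr_student, corr_teacher)               # push student toward teacher
distill_loss.backward()
```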
The recurrent structure is a prevalent framework for video super-resolution, modeling the temporal dependency between frames via hidden states. When applied to real-world scenarios with unknown and complex degradations, hidden states tend to contain unpleasant artifacts and propagate them to restored frames. Our analyses show that such artifacts can be largely alleviated when the hidden state is replaced with a cleaner counterpart. Based on this observation, we propose a Hidden State Attention (HSA) module to mitigate artifacts in real-world video super-resolution. Specifically, we first apply various cheap filters to produce a hidden state pool; for example, Gaussian blur filters smooth artifacts, while sharpening filters enhance details. To aggregate a new hidden state with fewer artifacts from this pool, we devise a Selective Cross Attention (SCA) module, which computes attention between input features and each candidate hidden state. Equipped with HSA, our proposed method, FastRealVSR, achieves a 2x speedup while obtaining better performance than Real-BasicVSR. Code will be available at https://github.com/TencentARC/FastRealVSR
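A rough sketch of this idea: build a pool of cheaply filtered hidden states (identity, blurred, sharpened), score each candidate against the current input features, and aggregate a cleaner hidden state. The kernel choices, 1x1 projections, and attention form are illustrative assumptions rather than the FastRealVSR implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def filter_pool(hidden: torch.Tensor) -> torch.Tensor:
    # hidden: (B, C, H, W) -> (B, K, C, H, W) pool of cheaply filtered hidden states
    c = hidden.shape[1]
    blur = torch.tensor([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 16.0
    sharpen = torch.tensor([[0., -1., 0.], [-1., 5., -1.], [0., -1., 0.]])
    candidates = [hidden]
    for kernel in (blur, sharpen):
        weight = kernel.repeat(c, 1, 1, 1)                   # depthwise 3x3 kernel, (C, 1, 3, 3)
        candidates.append(F.conv2d(hidden, weight, padding=1, groups=c))
    return torch.stack(candidates, dim=1)

class SelectiveCrossAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)

    def forward(self, feat: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        pool = filter_pool(hidden)                           # (B, K, C, H, W)
        b, k_num, c, h, w = pool.shape
        query = self.q(feat).flatten(1)                      # (B, C*H*W)
        keys = self.k(pool.reshape(b * k_num, c, h, w)).reshape(b, k_num, -1)
        scores = torch.einsum("bd,bkd->bk", query, keys) / (c * h * w) ** 0.5
        weights = scores.softmax(dim=1)                      # one weight per candidate state
        return torch.einsum("bk,bkchw->bchw", weights, pool) # aggregated, cleaner hidden state

feat = torch.randn(2, 32, 64, 64)
hidden = torch.randn(2, 32, 64, 64)
print(SelectiveCrossAttention(32)(feat, hidden).shape)       # torch.Size([2, 32, 64, 64])
```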
The ability to understand and generate similes is an essential step toward human-level AI. However, there is still a considerable gap between machine intelligence and human cognition on similes, since deep models based on statistical distributions tend to favour high-frequency similes. Hence, a large-scale symbolic knowledge base of similes is required: it supports the modeling of diverse yet unpopular similes while facilitating additional evaluation and reasoning. To bridge the gap, we propose a novel framework for large-scale simile knowledge base construction, together with two probabilistic metrics that enable an improved understanding of simile phenomena in natural language. Overall, we construct MAPS-KB, a million-scale probabilistic simile knowledge base covering 4.3 million triplets over 0.4 million terms, built from 70 GB of corpora. We conduct extensive experiments to justify the effectiveness and necessity of our framework. We also apply MAPS-KB to three downstream tasks and achieve state-of-the-art performance, further demonstrating its value.
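An illustrative sketch of a probabilistic simile store: each (topic, property, vehicle) triplet keeps a corpus frequency, from which simple conditional scores can be derived. The triplet schema and the two scores below are assumptions for illustration only; they are not MAPS-KB's actual probabilistic metrics.

```python
from collections import defaultdict

class SimileKB:
    def __init__(self):
        self.freq = defaultdict(int)          # (topic, property, vehicle) -> corpus count
        self.pair_freq = defaultdict(int)     # (topic, vehicle) -> count
        self.vehicle_freq = defaultdict(int)  # vehicle -> count

    def add(self, topic: str, prop: str, vehicle: str, count: int = 1) -> None:
        self.freq[(topic, prop, vehicle)] += count
        self.pair_freq[(topic, vehicle)] += count
        self.vehicle_freq[vehicle] += count

    def p_property_given_pair(self, topic: str, prop: str, vehicle: str) -> float:
        # how likely is this property, given that the topic is compared to the vehicle?
        pair = self.pair_freq[(topic, vehicle)]
        return self.freq[(topic, prop, vehicle)] / pair if pair else 0.0

    def p_property_given_vehicle(self, prop: str, vehicle: str) -> float:
        # how typical is the property for the vehicle across all topics?
        total = self.vehicle_freq[vehicle]
        hits = sum(c for (t, p, v), c in self.freq.items() if p == prop and v == vehicle)
        return hits / total if total else 0.0

kb = SimileKB()
kb.add("her smile", "bright", "the sun", count=3)
kb.add("his room", "bright", "the sun", count=1)
print(kb.p_property_given_pair("her smile", "bright", "the sun"))  # 1.0
print(kb.p_property_given_vehicle("bright", "the sun"))            # 1.0
```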
Vector-quantized (VQ-based) generative models usually consist of two basic components: VQ tokenizers and generative transformers. Prior research focuses on improving the reconstruction fidelity of VQ tokenizers but rarely examines how improved reconstruction affects the generation ability of generative transformers. In this paper, we surprisingly find that improving the reconstruction fidelity of VQ tokenizers does not necessarily improve generation. Instead, learning to compress semantic features within VQ tokenizers significantly improves generative transformers' ability to capture textures and structures. We thus highlight two competing objectives of VQ tokenizers for image synthesis: semantic compression and detail preservation. Unlike previous work that only pursues better detail preservation, we propose Semantic-Quantized GAN (SeQ-GAN) with two learning phases to balance the two objectives. In the first phase, we propose a semantic-enhanced perceptual loss for better semantic compression. In the second phase, we fix the encoder and codebook but enhance and fine-tune the decoder for better detail preservation. SeQ-GAN greatly improves VQ-based generative models and surpasses GAN and diffusion models on both unconditional and conditional image generation. Our SeQ-GAN (364M parameters) achieves a Fréchet Inception Distance (FID) of 6.25 and an Inception Score (IS) of 140.9 on 256x256 ImageNet generation, a remarkable improvement over ViT-VQGAN (714M), which obtains 11.2 FID and 97.2 IS.
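A hedged sketch of the two-phase schedule: phase 1 optimizes the whole VQ autoencoder (where the semantic-enhanced perceptual term would be added), and phase 2 freezes the encoder and codebook and fine-tunes only the decoder for detail preservation. The tiny modules and plain reconstruction losses below are placeholders, not the SeQ-GAN implementation.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(), nn.Conv2d(64, 8, 1))
codebook = nn.Embedding(512, 8)                  # 512 codes, 8-dim latents
decoder = nn.Sequential(nn.Conv2d(8, 64, 3, padding=1), nn.ReLU(),
                        nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1))

def quantize(z):                                 # nearest-code lookup + straight-through gradient
    b, c, h, w = z.shape
    flat = z.permute(0, 2, 3, 1).reshape(-1, c)
    idx = torch.cdist(flat, codebook.weight).argmin(dim=1)
    zq = codebook(idx).reshape(b, h, w, c).permute(0, 3, 1, 2)
    vq_loss = (zq - z.detach()).pow(2).mean() + 0.25 * (zq.detach() - z).pow(2).mean()
    return z + (zq - z).detach(), vq_loss

def reconstruct(x):
    zq, vq_loss = quantize(encoder(x))
    return decoder(zq), vq_loss

x = torch.randn(4, 3, 32, 32)

# Phase 1: semantic compression -- train encoder, codebook, and decoder jointly.
opt1 = torch.optim.Adam([*encoder.parameters(), *codebook.parameters(), *decoder.parameters()], lr=1e-4)
recon, vq_loss = reconstruct(x)
loss1 = (recon - x).abs().mean() + vq_loss       # + semantic-enhanced perceptual / adversarial terms
opt1.zero_grad()
loss1.backward()
opt1.step()

# Phase 2: detail preservation -- freeze encoder and codebook, fine-tune the decoder only.
for p in [*encoder.parameters(), *codebook.parameters()]:
    p.requires_grad_(False)
opt2 = torch.optim.Adam(decoder.parameters(), lr=1e-4)
recon, _ = reconstruct(x)
loss2 = (recon - x).abs().mean()                 # + perceptual / adversarial terms
opt2.zero_grad()
loss2.backward()
opt2.step()
```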
One-shot voice conversion (VC), where only a single utterance of the target speaker is available for reference, has become a hot research topic. Existing works usually disentangle timbre, while information about pitch, rhythm, and content remains mixed together. To further disentangle these speech components and perform one-shot VC effectively, we adopt random resampling for the pitch and content encoders, and use a variational contrastive log-ratio upper bound on mutual information together with gradient-reversal-layer-based adversarial mutual information learning, ensuring that during training the latent space of each component contains only the desired disentangled representation. Experiments on the VCTK dataset show that our model achieves state-of-the-art one-shot VC performance in terms of naturalness and intelligibility. Moreover, thanks to speech representation disentanglement, we can separately transfer timbre, pitch, and rhythm in one-shot VC. Our code, pre-trained models, and demos are available at https://im1eon.github.io/is2022-Srdvc/.
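A self-contained sketch of a gradient reversal layer (GRL), the building block behind the adversarial term mentioned above: the forward pass is the identity, while the backward pass flips (and scales) gradients, so an encoder learns to remove information a downstream classifier could exploit. The toy speaker classifier attached to a content representation is an illustrative assumption; the surrounding VC encoders are not shown.

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)                     # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # reversed gradient for the input, none for lambd

def grad_reverse(x, lambd: float = 1.0):
    return GradReverse.apply(x, lambd)

content = torch.randn(4, 16, requires_grad=True)       # e.g., content-encoder output
speaker_logits = torch.nn.Linear(16, 8)(grad_reverse(content))
loss = torch.nn.functional.cross_entropy(speaker_logits, torch.randint(0, 8, (4,)))
loss.backward()
print(content.grad.shape)                               # gradients flow back reversed: torch.Size([4, 16])
```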
We show that pre-trained Generative Adversarial Networks (GANs) such as StyleGAN and BigGAN can be used as a latent bank to improve the performance of image super-resolution. While most existing perceptual-oriented approaches attempt to generate realistic outputs by learning with an adversarial loss, our method, Generative LatEnt bANk (GLEAN), goes beyond existing practice by directly leveraging the rich and diverse priors encapsulated in a pre-trained GAN. Unlike prevalent GAN inversion methods, which require expensive image-specific optimization at runtime, our approach needs only a single forward pass for restoration. GLEAN can be easily incorporated into a simple encoder-bank-decoder architecture with multi-resolution skip connections. By adopting priors from different generative models, GLEAN can be applied to diverse categories (e.g., human faces, cats, buildings, and cars). We further present a lightweight version of GLEAN, named LightGLEAN, which retains only the critical components of GLEAN. Notably, LightGLEAN uses only 21% of the parameters and 35% of the FLOPs while achieving comparable image quality. We extend our method to different tasks, including image colorization and blind image restoration, and extensive experiments show that our proposed models perform favorably against existing methods. Code and models are available at https://github.com/open-mmlab/mmediting.
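A schematic sketch of the encoder-bank-decoder data flow: an encoder produces multi-resolution features plus a latent code, a frozen pre-trained generator serves as the latent bank, and a decoder fuses bank features with encoder features through skip connections. The tiny conv/MLP blocks below merely stand in for StyleGAN-scale modules; this illustrates the layout, not the GLEAN code.

```python
import torch
import torch.nn as nn

class ToyLatentBankSR(nn.Module):
    def __init__(self, channels: int = 32, latent_dim: int = 64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU())
        self.to_latent = nn.Linear(channels, latent_dim)
        # a frozen pre-trained GAN generator would sit here; a tiny MLP stands in for it
        self.bank = nn.Sequential(nn.Linear(latent_dim, channels * 16 * 16), nn.ReLU())
        for p in self.bank.parameters():
            p.requires_grad_(False)
        self.up1 = nn.Sequential(nn.ConvTranspose2d(channels * 2, channels, 4, 2, 1), nn.ReLU())
        self.up2 = nn.Sequential(nn.ConvTranspose2d(channels * 2, channels, 4, 2, 1), nn.ReLU())
        self.out = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lr):                                    # lr: (B, 3, 32, 32)
        f1 = self.enc1(lr)                                    # (B, C, 32, 32) skip feature
        f2 = self.enc2(f1)                                    # (B, C, 16, 16)
        latent = self.to_latent(f2.mean(dim=(2, 3)))          # pooled code fed to the latent bank
        bank_feat = self.bank(latent).reshape(-1, f2.shape[1], 16, 16)  # prior features from the bank
        d = self.up1(torch.cat([f2, bank_feat], dim=1))       # (B, C, 32, 32)
        d = self.up2(torch.cat([d, f1], dim=1))               # (B, C, 64, 64), multi-resolution skip
        return self.out(d)                                    # (B, 3, 64, 64): 2x super-resolved

print(ToyLatentBankSR()(torch.randn(1, 3, 32, 32)).shape)     # torch.Size([1, 3, 64, 64])
```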
Blind face restoration usually encounters face inputs at various scales, especially in the real world. However, most current works support only faces of a specific scale, which limits their applicability in realistic scenarios. In this work, we propose a novel scale-aware blind face restoration framework, named FaceFormer, which formulates facial feature restoration as a scale-aware transformation. The proposed facial feature up-sampling (FFUP) module dynamically generates upsampling filters based on the original scale ratio, which helps our network adapt to arbitrary face scales. Moreover, we propose a facial feature embedding (FFE) module that uses a transformer to hierarchically extract diverse and robust facial latent features. As a result, FaceFormer achieves both fidelity and robustness, restoring faces with realistic and symmetric details of facial components. Extensive experiments demonstrate that our proposed method, trained on synthetic datasets, generalizes better to natural low-quality images than current state-of-the-art methods.
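A speculative sketch of the general idea of scale-conditioned upsampling: a small MLP maps the scale ratio to the weights of a depthwise resampling kernel, so one network can handle faces of arbitrary scale. The kernel parameterization and normalization below are assumptions for illustration, not the FFUP module from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleAwareUpsample(nn.Module):
    def __init__(self, channels: int, ksize: int = 5):
        super().__init__()
        self.channels, self.ksize = channels, ksize
        self.filter_gen = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                                        nn.Linear(64, channels * ksize * ksize))

    def forward(self, feat: torch.Tensor, scale: float, out_size: tuple) -> torch.Tensor:
        # feat: (B, C, H, W); scale: input-to-target scale ratio; out_size: target (H, W)
        ratio = torch.tensor([[scale]], dtype=feat.dtype, device=feat.device)
        kernels = self.filter_gen(ratio).reshape(self.channels, -1)
        # each depthwise kernel normalized to sum to 1, keeping responses well-behaved
        kernels = kernels.softmax(dim=-1).reshape(self.channels, 1, self.ksize, self.ksize)
        up = F.interpolate(feat, size=out_size, mode="bilinear", align_corners=False)
        return F.conv2d(up, kernels, padding=self.ksize // 2, groups=self.channels)

feat = torch.randn(1, 32, 24, 24)
print(ScaleAwareUpsample(32)(feat, scale=2.7, out_size=(65, 65)).shape)  # torch.Size([1, 32, 65, 65])
```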
Alignment of adjacent frames is considered an essential operation in video super-resolution (VSR). Advanced VSR models, including the latest VSR Transformers, are usually equipped with carefully designed alignment modules. However, progress in the self-attention mechanism may challenge this common belief. In this paper, we rethink the role of alignment in VSR Transformers and make several counter-intuitive observations. Our experiments show that: (i) VSR Transformers can directly utilize multi-frame information from unaligned videos, and (ii) existing alignment methods are sometimes harmful to VSR Transformers. These observations indicate that we can further improve the performance of VSR Transformers simply by removing the alignment module and adopting a larger attention window. However, such a design dramatically increases the computational burden and cannot handle large motions. Therefore, we propose a new, efficient alignment method called patch alignment, which aligns image patches instead of pixels. VSR Transformers equipped with patch alignment achieve state-of-the-art performance on multiple benchmarks. Our work provides valuable insights into how multi-frame information is used in VSR and how to select alignment methods for different networks and datasets. Code and models will be released at https://github.com/xpixelgroup/rethinkvsralignment.
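An illustrative sketch of patch-level alignment: average the optical flow inside each patch, round it to an integer offset, and move whole patches by that offset, so pixels within a patch keep their relative layout instead of being resampled individually. The flow source, patch size, and warping details are assumptions, not the exact procedure from the paper.

```python
import torch
import torch.nn.functional as F

def patch_align(frame: torch.Tensor, flow: torch.Tensor, patch: int = 8) -> torch.Tensor:
    # frame: (B, C, H, W) neighboring frame to align toward the reference frame
    # flow:  (B, 2, H, W) displacement from the reference to `frame`
    #        (assumed convention: flow[:, 0] = x-displacement, flow[:, 1] = y-displacement)
    b, c, h, w = frame.shape
    # one (dx, dy) per patch: average the flow inside each patch and round to integers
    patch_flow = F.avg_pool2d(flow, patch).round()                    # (B, 2, H/p, W/p)
    # broadcast the per-patch offsets back to pixel resolution (constant within each patch)
    offsets = F.interpolate(patch_flow, scale_factor=patch, mode="nearest")
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid_x = (xs + offsets[:, 0]).clamp(0, w - 1)                     # (B, H, W)
    grid_y = (ys + offsets[:, 1]).clamp(0, h - 1)
    grid = torch.stack([grid_x / (w - 1) * 2 - 1, grid_y / (h - 1) * 2 - 1], dim=-1)
    # integer per-patch offsets mean grid_sample only relocates pixels; patch contents are untouched
    return F.grid_sample(frame, grid, mode="nearest", align_corners=True)

frame = torch.randn(1, 3, 64, 64)
flow = torch.randn(1, 2, 64, 64) * 4
print(patch_align(frame, flow).shape)                                 # torch.Size([1, 3, 64, 64])
```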